Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Aditya Sharma A, Mohammed Usman, HR Goutham, Sagar S, Dr Sharath Kumar YH, Dr Chethan YD
DOI Link: https://doi.org/10.22214/ijraset.2022.45820
Robots and machines in Industry 4.0 must operate efficiently and autonomously, working in a way that is efficient and substantially error-free. A key component of Industry 4.0, additive manufacturing (AM), or 3D printing, is bringing forth a wave of innovation in daily life by creating everything from toys to furniture to screws. Today, 3D printing is used mostly by enthusiasts. To create 3D objects for private or commercial use, various printing techniques have been developed, including fused deposition modelling (FDM), stereolithography (SLA), digital light processing (DLP), and selective laser sintering (SLS). Despite the enormous potential of AM techniques to produce unique parts on demand and with little material waste, 3D printing is not typically used in mass production, for two reasons. First, the cost of printing is high, and since 3D printing is typically a labour-intensive process, it is impractical in a mass-production setting. Second, there is no quality assurance process other than manual inspection. By applying deep learning to automate the quality control (QC) component of 3D printing, we aim to solve the second issue. Our research demonstrates a non-destructive in-situ monitoring strategy that can find surface flaws in 3D-printed products. We propose a deep learning-based method to assess the quality of a 3D-printed object, using two approaches: a deep convolutional neural network (Deep CNN) and a Deep CNN combined with a random forest classifier.
I. INTRODUCTION
In recent years, additive manufacturing has developed into a practical production method. Because additive manufacturing, often known as 3D printing, can produce intricate geometrical patterns with the use of design software, it opens up more production alternatives. However, many 3D printers lack a dedicated mechanism for monitoring the quality control procedure. Because a 3D printer will keep printing a surface even after a mistake is made, it produces surfaces with flaws including discolouration, porosity, cracking, and warping. Such flaws waste time, energy, and money, which is one of the main reasons 3D printing has not been fully adopted by the manufacturing sector and is still mostly employed by academic institutions for research and by hobbyists.
Additive manufacturing, despite having advantages of its own, is still in its infancy and relies on manual, error-prone quality control for the components it produces. Since the quality control procedure for 3D printing is currently not automated, the industry has not fully adopted the technology.
The automation of the QC process in 3D printing provides the answer to this problem. Using methods like deep learning, the first step is the detection and classification of flaws including rolling, discolouration, and poor finish quality. As a result, the QC process will require fewer human operators and be less prone to human error. This will streamline the additive manufacturing process and advance the industrial application of this technology.
II. LITERATURE SURVEY
Nariman Razaviarab, Yaser Banadkai, et al. [1]: Machines and robots must be highly automated in smart manufacturing systems (also known as Industry 4.0) to process information, increase production yield, visualise performance in real time, enable intelligent predictive maintenance systems, and match service providers with customer demands. A key element of the smart manufacturing system that enables flexible configuration and dynamically changing processes to swiftly respond to changes is additive manufacturing.
From this work, we learn about the numerous forms of image capture, the fundamental framework of the project we aim to produce, and the variety of errors that can occur throughout the additive manufacturing process. Technically, this study is highly comprehensive, providing details on training the CNN, epoch counts, accuracy rates, and other factors that affect the reliability of its conclusions.
Chenang Liu, Rongxuang (Raphael) Wang, et al. [2]: For the quality monitoring and control of additive manufacturing (AM) processes, layer-wise 3D surface morphology data is essential. However, the majority of 3D scan methods now in use require either contact or long acquisition times, making it impossible to collect 3D surface morphology data in real time. Since real-time 3D surface data capture in AM is the goal of this work, supervised deep learning-based image analysis is used to accomplish it. The main idea behind the suggested approach is to learn the correlation between a 2D image and a 3D point cloud using a deep learning algorithm called a convolutional neural network (CNN). Both simulated and actual case studies were conducted to verify the efficacy and efficiency of the suggested strategy. The findings show that this technology has a great deal of promise for real-time surface morphology measurement in AM and other advanced manufacturing techniques. An in-depth discussion of processing 3D images as the integration of planes is provided in this study, along with a general understanding of the algorithm's operation and constraints.
The research for our paper makes considerable use of the Deep CNN described there, including how the neural network is trained. From this work, we learn how to measure accuracy using the surface roughness Ra and the predicted-Ra graph.
Yao Chen et al. [3]: In the aerospace and defence sectors, additive manufacturing (AM) technology is regarded as one of the most promising manufacturing techniques. However, it is well known that internal flaws in AM components, such as powder agglomeration, balling, porosity, internal cracks, and thermal/internal stress, can have a substantial impact on the final parts' quality, mechanical performance, and safety. Defect inspection techniques are crucial for minimising produced flaws as well as enhancing the mechanical and surface quality of AM components. This paper explains defect inspection methods and how they are used in additive manufacturing processes, and reviews how flaws arise in AM processes. Future directions are given, as well as a summary of conventional defect detection technology and surface defect detection techniques based on deep learning. The paper goes into great detail regarding the various kinds of flaws and the factors that contribute to them, and also discusses various methods for gathering data or imaging, including IR waves, eddy currents, and ultrasound.
Loek Tonner, et al. [4]: Defects in 3D-printed objects generally occur where the printer's base and the product's surface come into contact. Convolutional neural networks, in particular, have been demonstrated to be one of the most effective machine learning techniques for processing image data. Supervised deep learning techniques for classification have also seen significant success. However, such methods frequently require substantial volumes of labelled training data, which are often unavailable for this problem case. It is particularly challenging for supervised classification algorithms to accurately categorise faults since the class of defects is frequently underrepresented. In this study, the researchers divided the flaws into five categories, and their model identifies a flaw and assigns it to one of the five categories.
Lequn Chen, Xiling Yao, et al. [5]: Surface observation is a crucial component of additive manufacturing (AM) quality control. To prevent further deterioration of component quality throughout the AM process, surface flaws must be found early on. This research proposes a quick method for directed energy deposition (DED) surface fault diagnosis. The main contribution of this work is in-situ point cloud processing with machine learning techniques that enables autonomous surface monitoring without sensor pauses. Several concurrently running sub-processes carry out the in-situ point cloud processing procedures of filtering, segmentation, surface-to-point distance computation, point clustering, and machine learning feature extraction. Surface flaws are identified and categorised using a combination of supervised and unsupervised machine learning algorithms. The proposed method, which has been empirically validated, attains an accuracy of 93.15 per cent in surface defect identification. In this process, points are clustered using the DBSCAN technique, allowing the type of defect to be determined.
III. METHODOLOGY
The model is created as shown in Fig. 6. The images are initially gathered by taking photos with a mobile phone camera, keeping the background black when a white object is used and using a white background when a dark object is used. To expand the number of images, which produces higher accuracy and better outcomes, each image in the dataset is rotated in 90-degree increments.
The image-gathering phase entails capturing and rotating images of 3D-printed items.
In the image processing phase, we work on preparing the images for categorization by subjecting them to filters and other algorithms.
We divide the processed images into two categories, defect and no defect, in the model's prediction section.
A. Image Gathering
In this section, we take pictures of 3D-printed objects against a background and in white light. The light source is a 5W (581930) Philips Cosmos LED bulb. A white background is used for objects of other colours, and a black background is used for objects printed in light-coloured material. The pictures are taken and then put into two folders: flaw and non-flaw. This preliminary classification is based on visual examination and intentional or pre-programmed flaws in the printed products. So that the trained classifiers are rotation invariant, each image is rotated in increments of 90 degrees, giving a total of 6000 classified images.
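The rotation step described above can be sketched as follows; the function name and the use of NumPy are our own illustration, not the paper's actual implementation:

```python
import numpy as np

def augment_with_rotations(image):
    """Return the image plus its 90-, 180-, and 270-degree rotations.

    Each captured photo is rotated in 90-degree increments, so one
    capture yields four training images.
    """
    return [np.rot90(image, k) for k in range(4)]

# a toy 4x4 array stands in for a real photograph
img = np.arange(16).reshape(4, 4)
augmented = augment_with_rotations(img)
assert len(augmented) == 4          # one capture -> four images
assert (augmented[0] == img).all()  # k=0 keeps the original
```

Applied to 1500 captured photos, this step yields the 6000-image dataset.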
B. Image Processing
1. YOLO
YOLO is a method that provides real-time object detection using neural networks. The popularity of this algorithm is due to its accuracy and speed. It has been applied in a variety of ways to identify animals, humans, parking meters, and traffic lights. YOLO is an acronym for You Only Look Once. This algorithm identifies and locates different objects in a picture in real time. The class probabilities of the detected objects are provided by the object identification process in YOLO, which is framed as a regression problem. Convolutional neural networks (CNNs) are used by the YOLO method to recognise items instantly. The approach needs just one forward propagation through a neural network to detect objects, as the name implies. In other words, a single algorithm run is used to predict over the complete image. The CNN is employed to forecast multiple class probabilities and bounding boxes concurrently.
The following are some reasons why the YOLO algorithm is crucial:
a. Speed: Because this system can forecast objects in real-time, detection times are improved.
b. High Degree of Accuracy: YOLO is a prediction method that yields precise findings with little background error.
c. Excellent Learning Skills: The algorithm's learning capabilities allow it to learn object representations and use them in object detection.
2. YOLO Working
a. The image is first separated into grid cells. B bounding boxes are predicted in each grid cell, along with confidence ratings. To determine the class of each object, the cells forecast the class probability.
b. For instance, a bicycle, a dog, and a car are examples of at least three different types of objects. A single convolutional neural network is used to make all of the predictions concurrently.
c. Intersection over Union (IoU) ensures the predicted bounding boxes match the actual boxes of the objects. This step removes bounding boxes that are not necessary or do not match the properties of the objects (such as height and width). The final detection consists of the bounding boxes that precisely fit the objects.
d. For instance, the bicycle is enclosed by the yellow bounding box, while the car is enclosed by the pink bounding box. The blue bounding box has been used to highlight the dog.
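The IoU check in step (c) can be sketched with a small helper; the box format (x1, y1, x2, y2) is an assumption for illustration:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # corners of the overlapping region
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

assert iou((0, 0, 2, 2), (0, 0, 2, 2)) == 1.0  # identical boxes
assert iou((0, 0, 1, 1), (2, 2, 3, 3)) == 0.0  # disjoint boxes
```

YOLO keeps a predicted box only when its IoU with a better-scoring box is low, which is how redundant boxes around the same object are discarded.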
3. Canny Edge Detector
A Canny edge detector is a multi-step method that can find the edges in any given image. It follows the steps listed below when detecting edges in an image.
a. Using a Gaussian filter, noise in the input image is removed. Edge detection results are quite sensitive to image noise since the underlying mathematics is heavily focused on derivatives (see step b: gradient calculation). Applying Gaussian blur to smooth the image is one technique to remove this noise. A Gaussian kernel (3x3, 5x5, 7x7, etc.) is convolved with the image to achieve this. The kernel size determines the blurring effect: in general, the smaller the kernel, the less noticeable the blur. A Gaussian filter kernel of size (2k+1) x (2k+1) is described by the following equation:

H(i, j) = (1 / (2*pi*sigma^2)) * exp( -((i - (k+1))^2 + (j - (k+1))^2) / (2*sigma^2) ),  for 1 <= i, j <= 2k+1
b. Calculating the derivative of the Gaussian filter to get the gradient along the x and y axes of the image's pixels
c. Suppress the non-maximum pixels: for each pixel on a curve, compare its gradient magnitude with its neighbours along the direction perpendicular to the edge, and keep it only if it is the local maximum, thinning each edge to one pixel.
d. Using the hysteresis thresholding approach, discard pixels whose gradient magnitude is below the low threshold, keep pixels above the high threshold, and keep pixels between the two thresholds only if they are connected to a strong edge.
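The Gaussian kernel from step (a) can be built directly from the formula above; `gaussian_kernel` is our illustrative name, and in practice libraries such as OpenCV provide this smoothing step together with the full Canny pipeline:

```python
import numpy as np

def gaussian_kernel(k, sigma=1.0):
    """Build the (2k+1) x (2k+1) Gaussian kernel from step (a).

    H(i, j) = 1/(2*pi*sigma^2) * exp(-((i-(k+1))^2 + (j-(k+1))^2) / (2*sigma^2)),
    for 1 <= i, j <= 2k+1, then normalised so the weights sum to 1.
    """
    ax = np.arange(1, 2 * k + 2)  # indices 1 .. 2k+1
    ii, jj = np.meshgrid(ax, ax, indexing="ij")
    h = np.exp(-((ii - (k + 1)) ** 2 + (jj - (k + 1)) ** 2) / (2 * sigma ** 2))
    h /= 2 * np.pi * sigma ** 2
    return h / h.sum()

kernel = gaussian_kernel(1)  # k=1 gives a 3x3 kernel
assert kernel.shape == (3, 3)
assert kernel[1, 1] == kernel.max()        # centre weight is largest
assert abs(kernel.sum() - 1.0) < 1e-9      # weights sum to 1
```

Convolving the image with this kernel produces the smoothed input used by the gradient and thresholding steps that follow.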
C. Classification
The model uses two algorithms to make predictions: Deep CNN and Deep CNN + Random Forest.
1. Deep CNN
Deep convolutional neural networks, often known as CNNs or DCNNs, are the kind of network most frequently employed to detect patterns in images and videos. DCNNs, which use a three-dimensional neural pattern inspired by the visual cortex of animals, have developed from conventional artificial neural networks. Deep convolutional neural networks are sometimes used for natural language processing but are primarily employed for tasks like object detection, image classification, and recommendation systems. We employed 2 convolutional layers and 4 hidden layers in the DCNN model that we implemented, and trained it for 50 epochs with 5 steps per epoch.
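To make the convolutional layers concrete, the core operation a single filter performs can be sketched in plain NumPy (deep learning frameworks call this a convolution, though no kernel flip is applied); this is an illustration, not the paper's actual training code:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: what one filter in a conv layer computes."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            # weighted sum of the patch under the sliding window
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# a vertical-edge filter responds where intensity changes left-to-right,
# e.g. at the boundary between a printed surface and the background
image = np.array([[0, 0, 9, 9]] * 4, dtype=float)
edge_filter = np.array([[-1, 0, 1]] * 3, dtype=float)
response = conv2d(image, edge_filter)
assert response.shape == (2, 2)  # 4x4 input, 3x3 kernel -> 2x2 output
```

A DCNN stacks many such filters (learned, not hand-picked) followed by hidden layers that map the filter responses to class scores.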
2. Deep CNN + Random Forest
Classification is a key component of machine learning: we seek to identify the class (also known as a group) to which an observation belongs. For many business applications, such as predicting whether a specific user will purchase a product or whether a specific loan will default, the ability to correctly classify observations is quite valuable.
Data science offers numerous classification algorithms, including decision trees, logistic regression, support vector machines, and naive Bayes classifiers. But the random forest classifier is near the top of the classifier hierarchy (there is also the random forest regressor, but that is a topic for another day).
Below, we outline how basic decision trees function, how different decision trees are combined to create a random forest, and why random forests are so effective at their jobs.
We employed 5 convolutional layers in this model.
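As a toy sketch of how trees are combined into a random forest, the following uses depth-1 stumps standing in for full decision trees and a tiny array standing in for CNN-extracted feature vectors; all names and data are our own illustration, not the paper's pipeline:

```python
import numpy as np

def fit_stump(X, y):
    """Fit a depth-1 decision tree: one feature, one threshold, one polarity."""
    best, best_err = None, 2.0
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for pol in (0, 1):  # try the split and its inversion
                pred = (X[:, f] > t).astype(int) ^ pol
                err = np.mean(pred != y)
                if err < best_err:
                    best, best_err = (f, t, pol), err
    return best

def predict_stump(stump, X):
    f, t, pol = stump
    return (X[:, f] > t).astype(int) ^ pol

def fit_forest(X, y, n_trees=25, seed=0):
    """Train each stump on a bootstrap resample of the data (bagging)."""
    rng = np.random.default_rng(seed)
    forest = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), len(X))  # sample with replacement
        forest.append(fit_stump(X[idx], y[idx]))
    return forest

def predict_forest(forest, X):
    votes = np.mean([predict_stump(s, X) for s in forest], axis=0)
    return (votes > 0.5).astype(int)  # majority vote across trees

# toy "feature vectors": feature 0 separates flaw (1) from non-flaw (0)
X = np.array([[0.0, 0.3], [0.0, 0.7], [1.0, 0.2], [1.0, 0.9]])
y = np.array([0, 0, 1, 1])
forest = fit_forest(X, y)
assert (predict_forest(forest, X) == y).all()
```

Because each tree sees a different bootstrap sample, their individual errors tend to cancel out in the vote, which is the intuition behind the accuracy gain of the hybrid model.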
IV. WORKING
Flask is used to create the application’s front end.
A selected image is first uploaded from the device's storage. After the image is displayed, the predictions made by each algorithm are shown alongside it.
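A minimal sketch of such a Flask front end, assuming a form field named `image` and placeholder predictions standing in for the real classifiers:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # minimal upload form; the real front end would render a template
    return (
        "<form method='post' action='/predict' enctype='multipart/form-data'>"
        "<input type='file' name='image'><input type='submit'></form>"
    )

@app.route("/predict", methods=["POST"])
def predict():
    uploaded = request.files.get("image")
    if uploaded is None:
        return "no image uploaded", 400
    # placeholder: here the image would be preprocessed and passed to
    # both classifiers (Deep CNN and Deep CNN + Random Forest)
    return {"deep_cnn": "no defect", "deep_cnn_rf": "no defect"}
```

Running `app.run()` serves the form; posting an image returns the two per-algorithm predictions as JSON.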
V. RESULT AND CONCLUSION
The dataset used consists of 6000 images: 4000 are of objects printed in white material, photographed against a black background, and 2000 are of objects printed in coloured material, photographed against a white background.
The train/test splits and corresponding accuracy scores are as follows:

Train/test split (%) | Deep CNN accuracy (%) | Deep CNN + Random Forest accuracy (%)
80                   | 80                    | 85
70                   | 73                    | 81
65                   | 74                    | 79
60                   | 70                    | 77
We can conclude from a comparison of the results that the Deep CNN + Random Forest algorithm consistently provides higher accuracy (about 5 to 8 percentage points across the splits) and is the preferred method for classifying the images.
VI. FUTURE WORK
A different method may be included in the future as part of the proposed effort to provide a more thorough examination and investigation of the classification of surface flaws in 3D-printed objects. Models should be developed to operate in real time, which would be very helpful in manufacturing. Future work will involve connecting a camera to the 3D printer so that flaws can be spotted instantly.
REFERENCES
[1] "Toward Enabling a Reliable Quality Monitoring System for Additive Manufacturing Process using Deep Convolutional Neural Networks," Yaser Banadkai, Nariman Razaviarab, Hadi Ferkmandi, Safura Sharifi, 2018.
[2] "Real-time 3D Surface Measurement in Additive Manufacturing Using Deep Learning," Chenang Liu, Rongxuang (Raphael) Wang, Zhenyu (James) Kong, Suresh Babu, Chase Jostlin, James Ferguson, 2019.
[3] "Defect inspection technologies for additive manufacturing," Yao Chen et al., 2021.
[4] "Anomaly Detection for Visual Quality Control of 3D-Printed Products," Loek Tonner, Jxaiapeng Li, Vladimir Osin, Mike Holderski, Vlado Menskovsky, 2019.
[5] "Rapid surface defect identification for additive manufacturing with in-situ point cloud processing and machine learning," Lequn Chen, Xiling Yao, Peng Xu, Seung Ki Moon, Guijun Bi, 2020.
[6] "A deep learning-based model for defect detection in laser-powder bed fusion using in-situ thermographic monitoring," Josef Tomes, 2020.
[7] "Deep Learning for Texture Classification Via Multiwavelet Fusion of Scattering Transforms," Amir Dadashialehi, Alireza Bab Hadisahar, 2017.
Copyright © 2022 Aditya Sharma A, Mohammed Usman, HR Goutham, Sagar S, Dr Sharath Kumar YH, Dr Chethan YD. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET45820
Publish Date : 2022-07-20
ISSN : 2321-9653
Publisher Name : IJRASET